This is the third and final article in the short series for copilots, ‘Making challenge specifications better qualitatively’. If you haven’t read the first two articles, please do so before reading this one. Links to the other articles are at the bottom of the page.
The goal of this series, and of this article, is to ensure that best practices in writing quality specifications are adopted consistently and widely, and that all copilots are on the same page about the parameters they should ideally look to optimise.
A basic qualitative template on how to structure the Challenge Specification – the following is not exactly a template, but an indication of what sections should ideally be included in the specification. This, in concert with the other considerations made in the previous articles, can help you come up with specifications that improve the overall challenge experience.
The sections to be included are:
1. An attractive and descriptive heading – pretty self-explanatory. The goal here is to draw the attention of as many potential participants as possible so that they at least explore the challenge.
2. A section named Context, with two sub-headings:
Project Context – briefly discuss what the overall goal of the entire project is.
Challenge Context – briefly discuss the overall goal of this particular challenge.
3. A section named Expected Outcome – under this section, clearly describe (without going into the individual details of the challenge yet) what the outcome of this challenge needs to be, and give some indication of how this outcome fits into the context of the entire project.
4. A section named Challenge Details – under this section, discuss all the remaining challenge details that one usually would, with suitable sub-sections based on the challenge’s individual requirements.
Note – avoid writing large paragraphs unless necessary; where large sections are required, they should ideally be broken into lists and further well-thought-out sub-sections.
Furthermore, the use of images/illustrations is encouraged. An example of a well-crafted specification is that of the Singlecell - Trajectory Inference Methods marathon match (it might be a good idea to bookmark this challenge for future reference).
Although that specification doesn’t contain the sections discussed in this article, it is a great example of excellent use of illustrations, good use of sub-sections, and concise, to-the-point text. An attempt should be made to use similar formatting for the text under the ‘Challenge Details’ section. This kind of formatting can help deliver an overall aesthetically pleasing presentation of the challenge specification.
5. A section named Scorecard Aid – the goal of this section is to inform members and reviewers how the scorecard will be used to review the challenge. At times, although copilots and PMs are clear about which areas of a submission should be prioritised during review, the same clarity is not available to the reviewers or the members. Making this information available beforehand allows members to prioritise their submission better, and allows reviewers to review more consistently and in alignment with the copilot’s and PM’s intentions, which in turn helps avoid post-review conflicts.
Copilots should write this section carefully, clearly communicating which areas of the submission will be of the highest significance. They also have the authority to provide whatever additional details they find suitable to make the use of the scorecard clearer. Some examples of what to include in the ‘Scorecard Aid’ section:
a. If the standard dev scorecard is being used, it helps to describe which areas of the requirements will generally be treated as major and which as minor. If the copilot wants the major/minor differentiation to be handled in some other reasonable manner during review, they are encouraged to communicate and clarify that under this section.
b. For other kinds of review scorecards, say a subjective scorecard, clear instructions should be provided under this section to ensure that everyone is on the same page regarding the areas to be prioritised in the submission and in the review.
6. In addition to the sections above, any further sections can be added as required, while maintaining the specification’s general flow of abstraction from the big picture down to the nitty-gritty details, as discussed in the previous article in this series LINK.
Ideally, all the sections mentioned above should be included in every challenge specification. Suitable additions and changes can be made in exceptional cases if required; however, while exercising creative freedom, copilots should generally ensure that these sections are included when composing the challenge specification.
To reiterate, our end goal is to attain and maintain a state where the competitor experience is great and challenges are run in a professional manner. The Challenge Operations Team looks forward to your participation in making competing at Topcoder one of the most pleasurable work experiences for the crowdsourcing community!